Current Issue: July - September | Volume: 2018 | Issue Number: 3 | Articles: 5
Objectives. To assess the influence of RAGT on balance, coordination, and functional independence in activities of daily living of chronic stroke survivors with ataxia at least one year after injury. Methods. This was a randomized controlled trial. Patients were allocated to either therapist-assisted gait training (TAGT) or robotic-assisted gait training (RAGT). Both groups received 3 weekly sessions of physiotherapy with an estimated duration of 60 minutes each and prescribed home exercises. The following outcome measures were evaluated prior to and after completion of the 5-month treatment protocol: the Berg Balance Scale (BBS), the Timed Up and Go (TUG) test, the Functional Independence Measure (FIM), and the Scale for the Assessment and Rating of Ataxia (SARA). The Wilcoxon test was used for intragroup comparisons, and the Mann-Whitney test was used for between-group comparisons. Results. Nineteen stroke survivors with ataxia sequelae more than one year after injury were recruited. Both groups showed statistically significant improvement (P < 0.05) in balance, functional independence, and general ataxia symptoms. There were no statistically significant differences (P < 0.05) for between-group comparisons either at baseline or after completion of the protocol. Conclusions. Chronic stroke patients with ataxia showed significant improvements in balance and independence in activities of daily living after RAGT combined with conventional therapy and home exercises. This trial was registered with trial registration number 39862414.6.0000.5505.
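The statistical plan above pairs a Wilcoxon signed-rank test (within-group pre/post) with a Mann-Whitney U test (between independent groups). A minimal sketch of that analysis with scipy, on purely synthetic score data (the group sizes and score values here are illustrative, not the trial's data):

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical Berg Balance Scale scores (illustrative only).
ragt_pre = rng.normal(40, 5, 10)                 # RAGT group, baseline
ragt_post = ragt_pre + rng.normal(3, 2, 10)      # simulated improvement
tagt_post = rng.normal(42, 5, 9)                 # TAGT group, post-treatment

# Intragroup comparison: Wilcoxon signed-rank test on paired pre/post scores.
stat_w, p_w = wilcoxon(ragt_pre, ragt_post)

# Between-group comparison: Mann-Whitney U test on independent post scores.
stat_u, p_u = mannwhitneyu(ragt_post, tagt_post)

print(f"Wilcoxon p = {p_w:.4f}, Mann-Whitney p = {p_u:.4f}")
```

Both tests are nonparametric rank tests, which is why they suit ordinal clinical scales such as the BBS and SARA.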
Periacetabular osteotomy (PAO) is a complex surgical procedure to restore acetabular coverage in the dysplastic hip, and the amount of acetabular rotation during PAO plays a key role. Using computational simulations, this study assessed the optimal direction and amount of acetabular rotation in three dimensions for a patient undergoing PAO. Anatomy-specific finite element (FE) models of the hip were constructed based on clinical CT images. The calculated acetabular rotations during PAO were 9.7°, 18°, and 4.3° in the sagittal, coronal, and transverse planes, respectively. Based on the actual acetabular rotations, twelve postoperative FE models were generated. An optimal position was found by gradually varying the amount of acetabular rotation in each anatomical plane. The coronal plane was found to be the principal rotational plane, showing the strongest effect on joint contact pressure compared to the other planes. It is suggested that rotation in the coronal plane of the osteotomized acetabulum is one of the primary surgical parameters for achieving the optimal clinical outcome for a given patient.
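The search strategy described, gradually varying rotation in each anatomical plane and keeping the configuration with the lowest joint contact pressure, is essentially a grid search. A sketch of that idea, where `peak_contact_pressure` is a hypothetical surrogate function standing in for the study's FE solve (its coefficients and optimum are invented for illustration; the real pressures come from the patient-specific FE models):

```python
import itertools
import numpy as np

def peak_contact_pressure(sagittal, coronal, transverse):
    """Hypothetical surrogate for the FE-computed peak joint contact
    pressure (MPa); a stand-in for a full finite element solve."""
    # Coronal rotation is weighted most heavily, mirroring the paper's
    # finding that the coronal plane is the principal rotational plane.
    return (2.0 + 0.30 * (coronal - 25.0) ** 2 / 100.0
                + 0.05 * (sagittal - 10.0) ** 2 / 100.0
                + 0.05 * (transverse - 5.0) ** 2 / 100.0)

# Gradually vary the rotation in each anatomical plane around the actual
# intraoperative values (9.7°, 18°, 4.3°) and keep the minimum-pressure pose.
best = min(
    itertools.product(np.arange(5, 15, 1.0),    # sagittal sweep (deg)
                      np.arange(10, 35, 1.0),   # coronal sweep (deg)
                      np.arange(0, 10, 1.0)),   # transverse sweep (deg)
    key=lambda angles: peak_contact_pressure(*angles))

print("optimal rotations (sagittal, coronal, transverse):", best)
```

In the actual study each candidate pose requires a full FE contact analysis, so the sweep is far coarser (twelve postoperative models) than this toy grid.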
The focus of this research is to analyse both human hand motion and force during eating, with respect to differing food characteristics and cutlery (a fork and a spoon). A glove consisting of bend and force sensors has been used to capture the motion and contact force exerted by the fingers during different eating activities. The Pearson correlation coefficient has been used to show that a significant linear relationship exists between the bending motion of the fingers and the forces exerted during eating. Analysis of variance (ANOVA) and independent-samples t-tests are performed to establish whether the motion and force exerted by the fingers while eating are influenced by the different food characteristics and cutlery. The middle finger motion showed the least positive correlation with index fingertip and thumb-tip force, irrespective of the food characteristics and cutlery used. The ANOVA and t-test results revealed that the bending motion of the index finger and thumb varies with respect to differing food characteristics and the type of cutlery used (fork/spoon), whereas the bending motion of the middle finger remains unaffected. Additionally, the contact forces exerted by the thumb tip and index fingertip remain unaffected with respect to differing food types and cutlery used.
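The three tests named above (Pearson correlation, one-way ANOVA, independent-samples t-test) map directly onto scipy. A minimal sketch on synthetic sensor readings; the variable names, group sizes, and values are illustrative, not the glove's data:

```python
import numpy as np
from scipy.stats import pearsonr, f_oneway, ttest_ind

rng = np.random.default_rng(1)

# Hypothetical per-bite summaries: index-finger bend angle (deg) and
# index fingertip contact force (N).
index_bend = rng.normal(35, 4, 30)
index_force = 0.08 * index_bend + rng.normal(0, 0.2, 30)

# Linear relationship between bending motion and exerted force.
r, p_r = pearsonr(index_bend, index_force)

# One-way ANOVA: does bend differ across three food textures?
soft, firm, crumbly = index_bend[:10], index_bend[10:20], index_bend[20:]
f_stat, p_anova = f_oneway(soft, firm, crumbly)

# Independent-samples t-test: fork vs. spoon on the same measure.
t_stat, p_t = ttest_ind(index_bend[:15], index_bend[15:])

print(f"r={r:.2f} (p={p_r:.3g}), ANOVA p={p_anova:.3g}, t-test p={p_t:.3g}")
```

ANOVA handles the multi-level factor (food characteristics), while the t-test suffices for the two-level factor (fork vs. spoon).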
A widely discussed paradigm for brain-computer interfaces (BCI) is the motor imagery task using the noninvasive electroencephalography (EEG) modality. It often requires a long training session to collect a large amount of EEG data, which exhausts the user. One approach to shortening this session is to use instances from past users to train the learner for the novel user. In this work, direct transfer from past users is investigated and applied to multiclass motor imagery BCI. Then, active learning (AL) driven informative instance transfer learning is attempted for multiclass BCI. Informative instance transfer shows better performance than direct instance transfer, reaching the benchmark with a reduced amount of training data (49% less) for 6 out of 9 subjects. However, neither method performs best for all subjects in general. To obtain a generic transfer learning framework for BCI, an optimal ensemble of the informative and direct transfer methods is designed and applied. The optimized ensemble outperforms both the direct and informative transfer methods for all subjects except one on the BCI competition IV multiclass motor imagery dataset. It achieves the benchmark performance for 8 out of 9 subjects using on average 75% less training data. Thus, the amount of training data required from a new user is reduced significantly.
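The pipeline above combines three ideas: direct transfer (train on all past-user instances), informative transfer (use an AL-style criterion to keep only the source instances most useful for the new user), and an ensemble of the two. A rough sketch under strong simplifying assumptions: synthetic two-class "EEG features" instead of real multiclass trials, logistic regression instead of the paper's learner, and classifier uncertainty as the informativeness criterion:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical feature vectors: a large source pool from past users and a
# small labelled calibration set from the new user (synthetic data).
Xs = rng.normal(0.0, 1.0, (200, 4)); ys = (Xs[:, 0] > 0.0).astype(int)
Xt = rng.normal(0.3, 1.0, (20, 4));  yt = (Xt[:, 0] > 0.3).astype(int)

# Direct transfer: train on the full source pool.
direct = LogisticRegression().fit(Xs, ys)

# Informative transfer: seed a model on the target calibration data, then
# keep only the source instances it is least certain about (AL-style).
seed = LogisticRegression().fit(Xt, yt)
margin = np.abs(seed.predict_proba(Xs)[:, 1] - 0.5)
informative = np.argsort(margin)[:100]          # 50% fewer source instances
inf_model = LogisticRegression().fit(Xs[informative], ys[informative])

# Ensemble of the two strategies: average the predicted probabilities.
proba = 0.5 * direct.predict_proba(Xt) + 0.5 * inf_model.predict_proba(Xt)
acc = (proba.argmax(axis=1) == yt).mean()
print(f"ensemble accuracy on new user: {acc:.2f}")
```

The paper's ensemble is optimized per subject rather than fixed at equal weights; the 50/50 averaging here is only a placeholder for that optimization.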
Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on handcrafted features, which are problem-dependent and optimal only for specific tasks. Moreover, they are highly susceptible to dynamic events such as illumination changes, camera jitter, and variations in object size. Feature learning approaches, on the other hand, are cheaper and easier because highly abstract and discriminative features can be produced automatically without expert knowledge. In this paper, we use automatic feature learning methods that combine optical flow with three different deep models (a supervised convolutional neural network (S-CNN), a pretrained CNN feature extractor, and a hierarchical extreme learning machine (H-ELM)) for human detection in videos captured by a nonstatic camera on an aerial platform at varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset. The models are compared in terms of training accuracy, testing accuracy, and learning speed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrate that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%. S-CNN produces an average accuracy of 95.6% with softmax and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. On a standard Central Processing Unit (CPU), H-ELM's training takes 445 seconds, whereas learning in S-CNN takes 770 seconds on a high-performance Graphics Processing Unit (GPU).
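The speed advantage reported for H-ELM comes from the extreme learning machine's closed-form training: hidden weights are random and fixed, and only the output weights are solved, via a pseudoinverse, instead of by iterative backpropagation. A minimal single-layer ELM in numpy (a simplified, non-hierarchical cousin of the paper's H-ELM, on toy features rather than UCF-ARG optical-flow crops):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for flattened image features; two classes, e.g.
# "human" vs. "background" (synthetic, illustrative data).
X = rng.normal(0, 1, (300, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)
T = np.eye(2)[y]                      # one-hot targets

# ELM: random, untrained hidden layer + closed-form output weights.
n_hidden = 100
W = rng.normal(0, 1, (20, n_hidden))  # random input-to-hidden weights
b = rng.normal(0, 1, n_hidden)        # random hidden biases
H = np.tanh(X @ W + b)                # fixed nonlinear projection
beta = np.linalg.pinv(H) @ T          # least-squares output weights

pred = (H @ beta).argmax(axis=1)
train_acc = (pred == y).mean()
print(f"ELM training accuracy: {train_acc:.2f}")
```

Because training reduces to one matrix pseudoinverse, it runs in seconds on a CPU, which is consistent with the 445-second H-ELM training time the abstract reports against 770 GPU seconds for the backpropagation-trained S-CNN.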